Despite recent successes in computer vision, new avenues remain to be explored. In this work, we propose a new dataset to investigate the effect of self-occlusion on deep neural networks. With TEOS (The Effect of Self-Occlusion), we present a 3D blocks-world dataset that focuses on the geometric shape of 3D objects and the omnipresent challenge that self-occlusion poses. We designed TEOS to investigate the role of self-occlusion in the context of object classification. Although significant progress has been made in object classification, self-occlusion remains a challenge: in the real world, the self-occlusion of 3D objects still presents substantial difficulties for deep learning approaches. Humans, however, handle this by deploying sophisticated strategies, e.g., changing the viewpoint or manipulating the scene to gather the necessary information. With TEOS, we present a dataset with two levels of difficulty (L1 and L2), containing 36 and 12 objects, respectively. We provide 738 uniformly sampled views of each object, their masks, object and camera positions and orientations, the amount of self-occlusion, and a CAD model of each object. We present baseline evaluations with five well-known deep neural networks for classification and show that TEOS poses a significant challenge for all of them. The dataset, as well as the pre-trained models, are publicly available for the scientific community at https://nvision2.data.eecs.yorku.ca/eyos.
The AASM guidelines are the result of decades of effort aimed at standardizing the sleep scoring procedure, with the goal of having a commonly used methodology. The guidelines cover several aspects, from technical/digital specifications (e.g., recommended EEG derivations) to detailed, age-specific sleep scoring rules. In the context of sleep scoring automation, deep learning has shown better performance compared to many other techniques. Usually, clinical expertise and the official guidelines are considered essential to support automated sleep scoring algorithms in solving the task. In this paper, we show that a deep-learning-based sleep scoring algorithm may not need to fully exploit clinical knowledge or strictly adhere to the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, can solve the scoring task even when using clinically non-recommended or non-conventional derivations, and even without exploiting information about the chronological age of the subjects. We finally reinforce a well-known finding: using data from multiple data centers consistently yields better-performing models than training on a single cohort. Indeed, we show that the latter holds true even when the size and heterogeneity of the single data cohort are increased. In all of our experiments, we used 28528 polysomnography studies from 13 different clinical studies.
Machine learning has recently emerged as a promising approach for studying complex phenomena characterized by rich datasets. In particular, data-centric approaches offer the possibility of automatically discovering structure in experimental datasets that manual inspection may miss. Here, we introduce an interpretable unsupervised-supervised hybrid machine learning approach, the hybrid-correlation convolutional neural network (Hybrid-CCNN), and apply it to experimental data generated using a programmable quantum simulator based on Rydberg atom arrays. Specifically, we apply Hybrid-CCNN to analyze new quantum phases on square lattices with programmable interactions. The initial unsupervised dimensionality reduction and clustering stage first reveals five distinct quantum phase regions. In a second supervised stage, we refine these phase boundaries and extract the relevant correlations for each phase by training a fully interpretable CCNN. The characteristic spatial weightings and correlations specifically recognized in each phase capture quantum fluctuations in the striated phase and identify two previously undetected phases, the rhombic and the boundary-ordered phase. These observations demonstrate that combining programmable quantum simulators with machine learning can serve as a powerful tool for the detailed exploration of correlated quantum states.
Despite the superior performance of deep learning (DL) on many segmentation tasks, DL-based approaches are notoriously overconfident, producing predictions with highly polarized label probabilities. This is often undesirable for the many applications with inherent label ambiguity, even in human annotations. This challenge has been addressed by leveraging multiple annotations per image together with segmentation uncertainty. However, multiple per-image annotations are often unavailable in real applications, and uncertainty alone does not give users full control over the segmentation results. In this paper, we propose novel methods to improve segmentation probability estimation without sacrificing performance in the realistic scenario where only a single ambiguous annotation is available per image. We marginalize the segmentation probability maps estimated by networks that are encouraged to under-/over-segment, without penalizing balanced segmentation. In addition, we propose a unified hypernetwork ensemble method to alleviate the computational burden of training multiple networks. Our approach successfully estimates segmentation probability maps that reflect the underlying structures and provides intuitive control over segmentation for challenging 3D medical image segmentation. Although the main focus of our proposed methods is not to improve binary segmentation performance, our methods slightly outperform the state of the art. The code is available at https://github.com/sh4174/hypernetensemble.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it either (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
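The two-stage decision described above (semantic checks on invariants first, a syntactic classifier as fallback) can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions, not INVALIDATOR's actual implementation: invariants are modeled as plain string sets, and `syntax_classifier` is a hypothetical stand-in for the trained model.

```python
def is_overfitting(patch_invariants, correct_spec_invariants, error_invariants,
                   syntax_classifier=None):
    """Decide whether a patch overfits, mirroring the two semantic rules
    sketched in the abstract, with a syntactic classifier as fallback."""
    # Rule (1): the patch violates an invariant of the correct specification.
    if not correct_spec_invariants.issubset(patch_invariants):
        return True
    # Rule (2): the patch preserves an erroneous behavior of the buggy program.
    if patch_invariants & error_invariants:
        return True
    # Fallback: syntactic reasoning via a trained model (hypothetical callable).
    if syntax_classifier is not None:
        return syntax_classifier(patch_invariants)
    return False
```

For example, a patch whose invariants still include an error-behavior invariant of the buggy program is flagged as overfitting without consulting the classifier.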
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. In contrast, in order to learn the representations that people use, so that we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
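The similarity-query idea can be illustrated with a contrastive-style loss in which the similar/dissimilar label comes from a user's answer rather than from a data-augmentation heuristic. This is a minimal sketch, not the paper's method: `embed`, the trajectory encoding, and the margin value are all hypothetical.

```python
import numpy as np

def similarity_loss(embed, traj_a, traj_b, similar, margin=1.0):
    """Contrastive-style loss where the similarity label `similar`
    comes from a user query about two behaviors."""
    d = np.linalg.norm(embed(traj_a) - embed(traj_b))
    if similar:
        return d ** 2                      # pull similar behaviors together
    return max(0.0, margin - d) ** 2       # push dissimilar ones at least `margin` apart
```

Summed over a batch of answered queries, minimizing this loss shapes the embedding so that distances reflect the user's notion of which features matter.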
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood-embedding concept proposed for data visualization. However, multivariate tabular data pose different representation-learning challenges than image data, and traditional machine learning often outperforms deep learning on tabular data. In this paper, we address the challenges of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm that replaces t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach defines the Gaussian embedding and the target cluster distribution independently, so as to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster-embedding methods on seven tabular data sets. This paper presents one of the first algorithms to jointly learn an embedding and clustering to improve multivariate tabular data representation for downstream clustering.
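The core substitution, using multivariate Gaussian cluster densities instead of a t-distribution kernel when computing soft assignments in the latent space, might be sketched as below. This is an illustrative reading of that one step, not the authors' implementation; the function interface and the choice of per-cluster covariances are assumptions.

```python
import numpy as np

def gaussian_soft_assignments(Z, means, covs):
    """Soft cluster assignments for latent points Z (n x d) from
    multivariate Gaussian densities, replacing a t-distribution kernel."""
    n, d = Z.shape
    densities = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
        diff = Z - mu
        # Mahalanobis distance of every point to this cluster's mean.
        expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
        densities.append(norm * np.exp(expo))
    Q = np.stack(densities, axis=1)
    return Q / Q.sum(axis=1, keepdims=True)   # rows sum to 1
```

The resulting assignment matrix plays the role that the t-distributed similarities play in t-SNE-style embedding objectives.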
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-balanced Re-weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, but in doing so show drastically dropping recall scores on the majority predicates, i.e., they lose the majority predicate performance. The trade-off between majority and minority predicate performance in the limited SGG datasets has not yet been properly analyzed. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more heavily for a better trade-off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 & V6 show the performance and generality of SCR with traditional SGG models.
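One plausible reading of skew-based re-weighting, in which classes whose predictions are over-represented receive larger loss weights, can be sketched as a re-weighted cross-entropy. The weighting formula below is a hypothetical simplification for illustration, not the actual SCR coefficient estimate.

```python
import numpy as np

def skew_reweighted_ce(probs, labels, alpha=1.0):
    """Cross-entropy re-weighted by prediction skew: classes the model
    over-predicts (mean predicted probability above the uniform prior)
    get weights > 1, a rough stand-in for SCR-style coefficients."""
    n, c = probs.shape
    mean_pred = probs.mean(axis=0)          # per-class prediction mass
    weights = (mean_pred * c) ** alpha      # =1 under uniform predictions
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    return float((weights[labels] * ce).mean())
```

Under perfectly uniform predictions all weights equal one and the loss reduces to plain cross-entropy; as predictions skew toward a few head predicates, those predicates' errors are weighted more.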
In this paper we discuss the theory used in the design of an open-source lightmorphic signatures analysis toolkit (LSAT). In addition to providing core functionality, the software package enables specific optimizations through its modular and customizable design. To promote its usage and inspire future contributions, LSAT is publicly available. Using a self-supervised neural network and augmented machine learning algorithms, LSAT provides an easy-to-use interface with ample documentation. The experiments demonstrate that LSAT improves the otherwise tedious and error-prone tasks of translating lightmorphic-associated data into usable spectrograms, enhanced with parameter tuning and performance analysis. With the provided mathematical functions, LSAT validates the nonlinearity encountered in the data conversion process while ensuring the suitability of the forecasting algorithms.
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify such changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on a recent Isolation Distributional Kernel (IDK). iCID identifies a change interval if there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID have the ability to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
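The interval-scoring step, comparing two temporally adjacent windows and flagging a change when their dissimilarity is high, can be illustrated as below. The Gaussian kernel mean-embedding distance here is a simple stand-in for the Isolation Distributional Kernel, and the window width and threshold are hypothetical parameters.

```python
import numpy as np

def interval_dissimilarity(w1, w2, gamma=1.0):
    """Distance between the kernel mean embeddings of two windows
    (a simple stand-in for an IDK-based dissimilarity score)."""
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2).mean()
    return k(w1, w1) + k(w2, w2) - 2 * k(w1, w2)

def detect_change_intervals(x, width, threshold, gamma=1.0):
    """Flag boundaries whose two adjacent windows are highly dissimilar."""
    hits = []
    for i in range(width, len(x) - width + 1, width):
        score = interval_dissimilarity(x[i - width:i], x[i:i + width], gamma)
        if score > threshold:
            hits.append(i)
    return hits
```

On a stream whose distribution shifts once, only the boundary between the pre- and post-shift windows receives a high score.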